6 research outputs found

    A survey on real-time 3D scene reconstruction with SLAM methods in embedded systems

    Full text link
    3D scene reconstruction with simultaneous localization and mapping (SLAM) is an important topic for transport systems such as drones, service robots and mobile AR/VR devices. Compared to a point-cloud representation, 3D reconstruction based on meshes and voxels is particularly useful for high-level functions, such as obstacle avoidance or interaction with the physical environment. This article reviews the implementation of a visual 3D scene reconstruction pipeline on resource-constrained hardware platforms. Real-time performance, memory management and low power consumption are critical for embedded systems. A conventional SLAM pipeline from sensors to 3D reconstruction is described, including the potential use of deep learning. The implementation of advanced functions with limited resources is detailed. Recent systems propose embedded implementations of 3D reconstruction methods at different granularities. The trade-off between required accuracy and resource consumption for real-time localization and reconstruction is one of the open research questions identified and discussed in this paper.
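    The granularity trade-off mentioned in the abstract can be sketched as follows (an illustrative example, not taken from the survey): voxelizing a synthetic point cloud at several cell sizes shows how coarser voxels shrink the number of cells an embedded system must store, at the cost of reconstruction fidelity.

    ```python
    # Illustrative sketch: voxelize a point cloud at different granularities.
    # Coarser voxels mean fewer cells to store but a less faithful map.
    import numpy as np

    def voxelize(points, voxel_size):
        """Map 3D points to occupied voxel indices; return the set of cells."""
        indices = np.floor(points / voxel_size).astype(np.int64)
        return {tuple(idx) for idx in indices}

    rng = np.random.default_rng(0)
    cloud = rng.uniform(0.0, 10.0, size=(100_000, 3))  # synthetic 10 m cube scene

    for size in (0.05, 0.2, 0.5):  # finer to coarser granularity
        cells = voxelize(cloud, size)
        print(f"voxel size {size} m -> {len(cells)} occupied cells")
    ```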

    Image quantization towards data reduction: robustness analysis for SLAM methods on embedded platforms

    No full text
    Poster, international audience. Embedded simultaneous localization and mapping (SLAM) aims at providing real-time performance for advanced perception functions despite restrictive hardware resources. Localization methods based on visible-light cameras include image processing functions that require frame memory management. This work reduces the dynamic range of the input frames and evaluates the accuracy and robustness of real-time SLAM algorithms on the quantized frames. We show that the input data can be reduced by up to 62% and 75% while maintaining a trajectory error below 0.15 m compared to full-precision input images.
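    The dynamic-range reduction described above can be sketched as simple bit-depth re-quantization (an assumption for illustration; the paper's actual pipeline is not reproduced here). Dropping an 8-bit frame to 3 bits removes 62.5% of the data, and dropping to 2 bits removes 75%, matching the reduction levels reported in the abstract.

    ```python
    # Hedged sketch: re-quantize an 8-bit grayscale frame to a lower bit depth,
    # shrinking the input data fed to the SLAM front end.
    import numpy as np

    def quantize_frame(frame, bits):
        """Reduce an 8-bit frame to `bits` bits per pixel (kept in uint8 range)."""
        assert frame.dtype == np.uint8 and 1 <= bits <= 8
        shift = 8 - bits
        return (frame >> shift) << shift  # drop low-order bits

    frame = np.arange(256, dtype=np.uint8).reshape(16, 16)  # synthetic test frame
    for bits in (3, 2):
        q = quantize_frame(frame, bits)
        print(f"{bits} bits: {len(np.unique(q))} gray levels, "
              f"{1 - bits / 8:.1%} data reduction")
    ```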

    SNET, a flexible, scalable network paradigm for manycore architectures

    No full text
    International audience. A scalable communication paradigm for manycore architectures, called SNet (Scalable NETwork), is presented. It offers a wide range of flexibility by exploring routing paths dynamically, taking the network load into consideration. The exploration is then followed by the data transmission phase through the chosen path.
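    The abstract only outlines the idea of load-aware path exploration, so the following is purely an illustrative sketch (the cost metric and graph model are assumptions, not SNet's actual protocol): a shortest-path search whose link costs reflect current load steers traffic around congested links before the transmission phase.

    ```python
    # Illustrative sketch of load-aware routing: Dijkstra over links whose
    # cost is 1 + current load (a hypothetical metric, not SNet's).
    import heapq

    def find_path(adjacency, load, src, dst):
        """Return the lowest-cost path from src to dst given per-link loads."""
        dist = {src: 0}
        prev = {}
        heap = [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                break
            if d > dist.get(node, float("inf")):
                continue
            for nxt in adjacency[node]:
                cost = d + 1 + load.get((node, nxt), 0)
                if cost < dist.get(nxt, float("inf")):
                    dist[nxt] = cost
                    prev[nxt] = node
                    heapq.heappush(heap, (cost, nxt))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return path[::-1]

    # 2x2 mesh: links 0-1, 0-2, 1-3, 2-3; link (0, 1) is congested.
    adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    print(find_path(adjacency, {(0, 1): 5}, 0, 3))  # routes around the loaded link
    ```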

    Work-in-Progress: Smart data reduction in SLAM methods for embedded systems

    No full text
    International audience. Visual-inertial simultaneous localization and mapping (SLAM) methods process and store large amounts of data from image sequences to estimate accurate and robust trajectories in real time. Real-time performance, memory management and low power consumption are critical for embedded SLAM with restrictive hardware resources. We aim to reduce the amount of input data injected into SLAM algorithms and, thereby, the memory footprint, while improving real-time performance. Two decimation approaches are used: constant filtering and adaptive filtering. The first decimates input images to reduce the frame rate (from 20 to 10, 7, 5 and 2 fps). The second uses inertial measurements to reduce the frame rate when no significant motion is detected. Applied to SLAM methods, the adaptive approach produces more accurate trajectories than constant filtering, while further reducing the amount of injected data by up to 85%. It also reduces peak memory consumption by up to 91% on average.
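    The adaptive-filtering idea above can be sketched as follows (the motion metric and threshold are illustrative assumptions, not the paper's): a frame is kept only when the accumulated inertial motion since the last kept frame exceeds a threshold, so fewer frames enter the SLAM pipeline during still phases. Constant filtering, by contrast, is just a fixed stride such as `frames[::2]`.

    ```python
    # Hedged sketch of adaptive frame decimation driven by IMU motion.
    # Threshold and motion values are illustrative, not from the paper.

    def decimate_adaptive(frames, imu_motion, threshold=0.1):
        """Keep a frame only if accumulated IMU motion since the last kept
        frame exceeds `threshold`; always keep the first frame."""
        kept = []
        accumulated = 0.0
        for frame, motion in zip(frames, imu_motion):
            accumulated += motion
            if not kept or accumulated >= threshold:
                kept.append(frame)
                accumulated = 0.0
        return kept

    frames = list(range(10))  # stand-ins for image frames
    motion = [0.0, 0.0, 0.2, 0.0, 0.05, 0.06, 0.0, 0.0, 0.0, 0.3]
    kept = decimate_adaptive(frames, motion)
    print(f"kept {len(kept)}/{len(frames)} frames: {kept}")
    ```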